68 research outputs found

    Music Emotion Recognition: Intention of Composers-Performers Versus Perception of Musicians, Non-Musicians, and Listening Machines

    Get PDF

    “Give me happy pop songs in C major and with a fast tempo”: A vocal assistant for content-based queries to online music repositories

    Get PDF
    This paper presents an Internet of Musical Things system devised to support recreational music-making, improvisation, composition, and music learning via vocal queries to an online music repository. The system combines a commercial voice-based interface with Jamendo, a cloud-based repository of Creative Commons music content. It lets the user query the Jamendo repository by six content-based features, and any combination thereof: mood, genre, tempo, chords, key, and tuning. Such queries differ from conventional music retrieval methods, which are based on the piece's title and the artist's name. The features were identified through a survey of 112 musicians, which preliminarily validated the concept underlying the proposed system. A user study with 20 musicians showed that the system was deemed usable, able to provide a satisfactory user experience, and useful in a variety of musical activities. Differences in the participants' needs were identified, highlighting the need for personalization mechanisms based on the user's expertise level. Importantly, the system was seen as a concrete solution to the physical encumbrances that arise from the concurrent use of an instrument and devices providing interactive media resources. Finally, the system offers benefits to visually-impaired musicians.
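    A minimal sketch of the kind of content-based query such a system can issue, written in Python against the public Jamendo API v3.0. The tag and tempo parameters (fuzzytags, speed) exist in that API; the mapping from the parsed vocal request to them, and the placeholder client_id, are illustrative assumptions. Key, chord, and tuning filters are omitted here since the public API does not expose them directly.

        # Sketch: turning a parsed vocal request into a Jamendo content-based query.
        # The endpoint and the fuzzytags/speed parameters follow the public
        # Jamendo API v3.0; the intent-to-parameter mapping is an assumption.
        import requests

        JAMENDO_ENDPOINT = "https://api.jamendo.com/v3.0/tracks/"
        CLIENT_ID = "your_client_id"  # placeholder: register at developer.jamendo.com

        def query_tracks(mood: str, genre: str, tempo: str) -> list:
            """Fetch tracks matching mood/genre tags and a coarse tempo class."""
            params = {
                "client_id": CLIENT_ID,
                "format": "json",
                "limit": 10,
                "fuzzytags": f"{mood} {genre}",  # e.g. "happy pop"
                "speed": tempo,  # verylow, low, medium, high, or veryhigh
            }
            response = requests.get(JAMENDO_ENDPOINT, params=params)
            response.raise_for_status()
            return response.json()["results"]

        # "Give me happy pop songs ... with a fast tempo"
        for track in query_tracks("happy", "pop", "high"):
            print(track["name"], "-", track["artist_name"])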

    Musical Haptic Wearables for Synchronisation of Visually-impaired Performers: a Co-design Approach

    Get PDF
    The emergence of new technologies is providing opportunities to develop novel solutions that facilitate the integration of visually-impaired people in various activities of daily life, including collective music making. This paper presents a study conducted with visually-impaired music performers, which involved a participatory approach to the design of accessible technologies for musical communication in group playing. We report on three workshops conducted with members of an established ensemble composed entirely of visually-impaired musicians. The first workshop focused on identifying the participants' needs when playing in groups and on how technology could satisfy those needs. The second and third workshops investigated, respectively, choir singing and ensemble instrument playing, focusing on the key issue of synchronisation identified in the first workshop. The workshops involved prototypes of musical haptic wearables, which were co-designed and evaluated by the participants. Overall, results indicate that wireless tactile communication represents a promising avenue to cater effectively to the needs of visually-impaired performers.
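    The paper does not specify a wire protocol, but the synchronisation idea can be illustrated with a short Python sketch in which a leader device broadcasts beat pulses to the wearables over the local network. OSC over UDP (via the python-osc package), the /haptic/pulse address, and the device IPs are illustrative assumptions, not the authors' design.

        # Sketch: a leader broadcasts tempo pulses to haptic wearables over Wi-Fi.
        import time
        from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

        WEARABLES = [("192.168.0.11", 9000), ("192.168.0.12", 9000)]  # hypothetical IPs
        clients = [SimpleUDPClient(ip, port) for ip, port in WEARABLES]

        def broadcast_beats(bpm: float, beats: int) -> None:
            """Trigger a short vibration on every wearable at each beat."""
            interval = 60.0 / bpm
            for beat in range(beats):
                intensity = 1.0 if beat % 4 == 0 else 0.6  # accent the downbeat
                for client in clients:
                    client.send_message("/haptic/pulse", [intensity, 50])  # gain, ms
                time.sleep(interval)

        broadcast_beats(bpm=100, beats=16)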

    Co-design of a Smart Cajon

    Get PDF
    The work of Luca Turchet is supported by a Marie Skłodowska-Curie Individual Fellowship under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 749561. Mathieu Barthet also acknowledges support from the EU H2020 Audio Commons grant (688382).

    Co-Design of Musical Haptic Wearables for Electronic Music Performer's Communication

    Get PDF

    To hear or not to hear: Sound Availability Modulates Sensory-Motor Integration

    Get PDF
    When we walk in place with our eyes closed after a few minutes of walking on a treadmill, we experience an unintentional forward body displacement (drift), called the sensory-motor aftereffect. Initially, this effect was attributed to the mismatch experienced during treadmill walking between visual information (the absence of optic flow, signaling body steadiness) and proprioceptive information (muscle-spindle firing, signaling body displacement). Recently, the effect has been shown to persist even in the absence of vision, suggesting that other information, such as the sound of steps, could play a role. To test this hypothesis, six cochlear-implanted individuals were recruited and their forward drift was measured before (Control phase) and after (Post Exercise phase) walking on a treadmill, with their cochlear implant system turned on and turned off. Cochlear-implanted individuals were particularly suited to this test because, when their system is turned off, they perceive total silence: even the sounds normally transmitted by bone conduction are eliminated. Results showed the absence of the aftereffect when the system was turned off, underlining the fundamental role played by sound in the control of action and breaking new ground for the use of interactive sound feedback in motor learning and motor development.

    Embodied Interactions with E-Textiles and the Internet of Sounds for Performing Arts

    Get PDF
    This paper presents initial steps towards the design of an embedded system for body-centric sonic performance. The proposed prototyping system allows performers to manipulate sounds through gestural interactions captured by wearable textile sensors. In real time, the e-textile sensor data control audio synthesis algorithms working with content from Audio Commons, a novel web-based ecosystem for repurposing crowd-sourced audio. The system enables creative embodied music interactions by combining seamless physical e-textiles with web-based digital audio technologies.
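    A minimal sketch of the mapping layer described above, assuming a microcontroller that streams one integer reading per line over USB serial and an audio engine listening for OSC messages; the port names, value range, and the /grain/density address are illustrative assumptions, not the authors' implementation.

        # Sketch: smoothing an e-textile stretch-sensor stream and mapping it
        # onto a synthesis parameter in real time.
        import serial  # pip install pyserial
        from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

        sensor = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical e-textile board
        synth = SimpleUDPClient("127.0.0.1", 57120)     # local audio engine (assumed)

        smoothed = 0.0
        ALPHA = 0.2  # exponential smoothing factor to tame sensor jitter

        while True:
            raw = int(sensor.readline())  # one 0-1023 reading per line (assumed)
            smoothed = ALPHA * raw + (1 - ALPHA) * smoothed
            synth.send_message("/grain/density", smoothed / 1023.0)  # normalise to 0-1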

    Internet of Musical Things: Vision and Challenges

    Get PDF

    Good use of corporate e-mail.

    Get PDF

    What do your footsteps sound like? An investigation on interactive footstep sounds adjustment

    Get PDF
    This paper presents an experiment in which participants were asked to adjust, while walking, the spectral content and amplitude of synthetic footstep sounds to match the sounds of their own footsteps. The sounds were interactively generated by means of a shoe-based system capable of tracking footfalls and delivering real-time auditory feedback via headphones. Results allowed identification of the mean value and range of variation of the spectral centroid and peak level of footstep sounds simulating various combinations of shoe type and ground material. Results showed that the effect of ground material on centroid and peak level depended on the type of shoe; similarly, the effect of shoe type on the two variables depended on the type of ground material. In particular, participants produced greater amplitudes for hard-sole shoes than for soft-sole shoes on solid surfaces, while similar amplitudes for both shoe types were found for aggregate, hybrid, and liquid surfaces. No significant correlations were found between either of the two acoustic features and participants' body size; this might be explained by the fact that, while adjusting the sounds, participants did not primarily focus on the acoustic rendering of their body. In addition, no significant differences were found between the values of the two acoustic features selected by the experimenters and those adjusted by participants, which can be taken as a measure of the soundness of the design choices made in synthesizing the footstep sounds for a generic walker. More importantly, this study showed that the relationships between the ground-shoe combinations do not change when participants are actively walking. This represents the first active-listening confirmation of a result that had previously only been shown in passive listening studies. The results of this research can be used to design ecologically-valid auditory rendering of foot-floor interactions in virtual environments.
    This work was supported partly by a grant from the Danish Council for Independent Research awarded to Luca Turchet (Grant No. 12-131985), and partly by a grant from the ESRC awarded to Ana Tajadura-Jiménez (Grant No. ES/K001477/1).
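    The spectral centroid analysed in this study is a standard measure: the amplitude-weighted mean frequency of a sound's spectrum. The short NumPy sketch below computes it over a single windowed frame; the frame length and sample rate are chosen for illustration.

        # Sketch: spectral centroid of one audio frame, in Hz.
        import numpy as np

        def spectral_centroid(frame: np.ndarray, sample_rate: int) -> float:
            """Amplitude-weighted mean frequency of a windowed frame."""
            magnitudes = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
            freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
            return float(np.sum(freqs * magnitudes) / np.sum(magnitudes))

        # Toy check: a 440 Hz sine should yield a centroid near 440 Hz.
        sr = 44100
        t = np.arange(2048) / sr
        print(spectral_centroid(np.sin(2 * np.pi * 440 * t), sr))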
    • …